

Search for: All records

Creators/Authors contains: "Molinaro, Dean D."


  1. Estimating human joint moments using wearable sensors has utility for personalized health monitoring and generalized exoskeleton control. Data-driven models have the potential to map wearable sensor data to human joint moments, even with a reduced sensor suite and without subject-specific calibration. In this study, we quantified the RMSE and R² of a temporal convolutional network (TCN) trained to estimate human hip moments in the sagittal plane using exoskeleton sensor data (i.e., a hip encoder and thigh- and pelvis-mounted inertial measurement units). We conducted three analyses in which we iteratively retrained the network while: 1) varying the input sequence length of the model, 2) incorporating noncausal data into the input sequence, thus delaying the network estimates, and 3) time-shifting the labels to train the model to anticipate (i.e., predict) human hip moments. We found that 930 ms of causal input data maintained model performance while minimizing input sequence length (validation RMSE and R² of 0.141±0.014 Nm/kg and 0.883±0.025, respectively). Further, delaying the model estimate by up to 200 ms significantly improved model performance compared to the best causal estimators (p<0.05), improving estimator fidelity in use cases where delayed estimates are acceptable (e.g., in personalized health monitoring or diagnoses). Finally, we found that anticipating hip moments further in time linearly increased model RMSE and decreased R² (p<0.05); however, performance remained strong (R² > 0.85) when predicting up to 200 ms ahead.
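     The abstract above describes mapping a short causal window of hip-exoskeleton sensor data to a sagittal hip moment with a temporal convolutional network. Below is a minimal sketch of that idea in PyTorch; the channel count, sampling rate (an assumed 200 Hz, so ~930 ms ≈ 186 samples), layer sizes, and dilation schedule are illustrative assumptions, not the authors' architecture. Delaying the estimate corresponds to appending noncausal samples to the input window, and anticipation corresponds to time-shifting the training labels forward.

     ```python
     # Minimal sketch of a causal temporal convolutional hip-moment estimator.
     # Assumptions (not from the paper): 200 Hz streams, 13 input channels
     # (1 hip encoder angle + 6-axis thigh IMU + 6-axis pelvis IMU).
     import torch
     import torch.nn as nn

     class CausalConv1d(nn.Module):
         """1-D convolution padded on the left only, so no future samples leak in."""
         def __init__(self, in_ch, out_ch, kernel_size, dilation):
             super().__init__()
             self.pad = (kernel_size - 1) * dilation
             self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

         def forward(self, x):                        # x: (batch, channels, time)
             x = nn.functional.pad(x, (self.pad, 0))  # left-pad along the time axis
             return self.conv(x)

     class HipMomentTCN(nn.Module):
         def __init__(self, in_ch=13, hidden=32, levels=4, kernel_size=3):
             super().__init__()
             layers, ch = [], in_ch
             for i in range(levels):                  # dilations 1, 2, 4, 8
                 layers += [CausalConv1d(ch, hidden, kernel_size, dilation=2 ** i),
                            nn.ReLU()]
                 ch = hidden
             self.tcn = nn.Sequential(*layers)
             self.head = nn.Linear(hidden, 1)         # sagittal hip moment (Nm/kg)

         def forward(self, x):                        # x: (batch, 13, window)
             h = self.tcn(x)
             return self.head(h[:, :, -1])            # estimate at the last time step

     # Example: one 930 ms window at the assumed 200 Hz -> 186 samples.
     model = HipMomentTCN()
     window = torch.randn(8, 13, 186)                 # batch of 8 synthetic windows
     print(model(window).shape)                       # torch.Size([8, 1])
     ```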
  2. Autonomous lower-limb exoskeletons must modulate assistance based on locomotion mode (e.g., ramp or stair ascent) to adapt to the corresponding changes in human biological joint dynamics. However, current mode classification strategies for exoskeletons often require user-specific tuning, have a slow update rate, and rely on additional sensors outside of the exoskeleton sensor suite. In this study, we introduce a deep convolutional neural network-based locomotion mode classifier for hip exoskeleton applications using an open-source gait biomechanics dataset with various wearable sensors. Our approach removes the limitations of previous systems as it is 1) subject-independent (i.e., requires no user-specific data), 2) capable of classifying continuously for smooth and seamless mode transitions, and 3) reliant only on the minimal wearable sensors native to a conventional hip exoskeleton. We optimized our model based on several important factors contributing to overall performance, such as transition label timing, model architecture, and sensor placement, which provides a holistic understanding of mode classifier design. Our optimized deep learning model showed a 3.13% classification error rate (steady-state: 0.80 ± 0.38%; transitional: 6.49 ± 1.42%), outperforming other machine learning-based benchmarks commonly practiced in the field (p<0.05). Furthermore, our multi-modal analysis indicated that our model can maintain high performance in different settings, such as unseen slopes on stairs or ramps. Thus, our study presents a novel locomotion mode classification framework capable of advancing robotic exoskeleton applications toward assisting community ambulation.
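     The continuous mode classifier above can be pictured as a compact 1-D CNN that maps each sliding sensor window to locomotion-mode probabilities. The sketch below assumes five modes (level ground, ramp ascent/descent, stair ascent/descent) and the same hypothetical 13-channel hip-exoskeleton input as the previous example; layer sizes and window length are placeholders rather than the optimized architecture from the study.

     ```python
     # Minimal sketch of a continuously updating locomotion mode classifier.
     # Channel count, modes, and layer sizes are illustrative assumptions.
     import torch
     import torch.nn as nn

     class LocomotionModeCNN(nn.Module):
         def __init__(self, in_ch=13, n_modes=5):
             super().__init__()
             self.features = nn.Sequential(
                 nn.Conv1d(in_ch, 32, kernel_size=5), nn.ReLU(),
                 nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
                 nn.AdaptiveAvgPool1d(1),             # collapse the time axis
             )
             self.classifier = nn.Linear(64, n_modes)

         def forward(self, x):                        # x: (batch, 13, window)
             return self.classifier(self.features(x).squeeze(-1))

     # Sliding a short window over the sensor stream yields a mode estimate
     # every control tick, which is what enables smooth mid-transition switching.
     model = LocomotionModeCNN()
     logits = model(torch.randn(1, 13, 100))          # one 100-sample window
     print(logits.argmax(dim=1))                      # predicted locomotion mode
     ```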
  3. Detection of the user’s walking is a critical part of exoskeleton technology for the full automation of smooth and seamless assistance during movement transitions. Researchers have taken several approaches to developing walk detection systems using different kinds of sensors; however, only a few existing solutions can detect these transitions using only the sensors embedded on a robotic hip exoskeleton (i.e., hip encoders and a trunk IMU), which is a critical consideration for implementing these systems in the loop of a hip exoskeleton controller. As a solution, we developed two walk detection models that implement a finite state machine, switching between walking and standing states through two transition conditions: stand-to-walk and walk-to-stand. One of our models dynamically detected the user’s gait cycle using two hip encoders and an IMU; the other model used only the two hip encoders. Our models were developed using a publicly available dataset and were validated online using a wearable sensor suite containing sensors commonly embedded on robotic hip exoskeletons. The two models were then compared with a foot contact estimation method, which served as a baseline for evaluating our models. The results of our online experiments validated the performance of our models, yielding delay times of 274 ms and 507 ms for the HIP+IMU and HIP ONLY models, respectively. Therefore, the walk detection models established in our study achieve reliable performance under multiple locomotive contexts without the need for manual tuning or sensors beyond those commonly implemented on robotic hip exoskeletons.
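     The two-state walk detector described above can be summarized as a small finite state machine toggled by stand-to-walk and walk-to-stand conditions. The sketch below uses a simple hip-velocity threshold with a dwell time as a stand-in for the paper's transition conditions; both numbers are placeholder assumptions, and the HIP+IMU variant would add trunk IMU checks on top of this structure.

     ```python
     # Minimal sketch of a two-state walk detector driven by hip encoder velocities.
     # The threshold and dwell time are placeholder values chosen for illustration.
     from enum import Enum

     class GaitState(Enum):
         STANDING = 0
         WALKING = 1

     class WalkDetector:
         def __init__(self, vel_threshold=0.5, dwell_samples=20):
             self.state = GaitState.STANDING
             self.vel_threshold = vel_threshold       # rad/s, assumed
             self.dwell_samples = dwell_samples       # consecutive samples required
             self._count = 0

         def update(self, left_hip_vel, right_hip_vel):
             moving = max(abs(left_hip_vel), abs(right_hip_vel)) > self.vel_threshold
             if self.state is GaitState.STANDING:
                 self._count = self._count + 1 if moving else 0
                 if self._count >= self.dwell_samples:        # stand-to-walk
                     self.state, self._count = GaitState.WALKING, 0
             else:
                 self._count = self._count + 1 if not moving else 0
                 if self._count >= self.dwell_samples:        # walk-to-stand
                     self.state, self._count = GaitState.STANDING, 0
             return self.state

     # Called once per control-loop sample with the two hip encoder velocities.
     detector = WalkDetector()
     print(detector.update(0.8, 0.7))   # stays STANDING until the dwell is satisfied
     ```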
  4. Step length is a critical gait parameter that allows a quantitative assessment of gait asymmetry. Gait asymmetry can lead to many potential health threats, such as joint degeneration, difficulty with balance control, and gait inefficiency. Therefore, accurate step length estimation is essential for understanding gait asymmetry and providing appropriate clinical interventions or gait training programs. The conventional approach to step length measurement relies on foot-mounted inertial measurement units (IMUs). However, this may not be suitable for real-world applications due to sensor signal drift and the potential obtrusiveness of distally mounted sensors. To overcome this challenge, we propose a deep convolutional neural network-based step length estimation method that uses only proximal wearable sensors (hip goniometer, trunk IMU, and thigh IMU) and is capable of generalizing across walking speeds. To evaluate this approach, we utilized treadmill data collected from sixteen able-bodied subjects at different walking speeds and tested our optimized model on overground walking data. Our CNN model estimated step length with an average mean absolute error of 2.89 ± 0.89 cm across all subjects and walking speeds. Since wearable sensors and CNN models are easily deployable in real time, our findings can enable personalized, real-time step length monitoring in wearable assistive devices and gait training programs.
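     The step-length estimator above is a regression counterpart to the classifier in entry 2: a 1-D CNN mapping a per-step window of proximal sensor channels to a single length in centimeters, evaluated by mean absolute error. The sketch below assumes a fixed-length, 13-channel step window and illustrative layer sizes; the labels here are synthetic placeholders, not data from the study.

     ```python
     # Minimal sketch of step-length regression from proximal sensors only.
     # Window length, channel count, and layer sizes are illustrative assumptions.
     import torch
     import torch.nn as nn

     class StepLengthCNN(nn.Module):
         def __init__(self, in_ch=13):
             super().__init__()
             self.net = nn.Sequential(
                 nn.Conv1d(in_ch, 32, kernel_size=7), nn.ReLU(),
                 nn.Conv1d(32, 64, kernel_size=7), nn.ReLU(),
                 nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                 nn.Linear(64, 1),                    # step length in cm
             )

         def forward(self, x):                        # x: (batch, 13, samples)
             return self.net(x)

     # Evaluation mirrors the reported metric: mean absolute error over steps.
     model = StepLengthCNN()
     steps = torch.randn(32, 13, 120)                 # 32 synthetic step windows
     true_cm = torch.rand(32, 1) * 40 + 40            # placeholder labels, 40-80 cm
     mae_cm = (model(steps) - true_cm).abs().mean()
     print(f"MAE: {mae_cm.item():.2f} cm")
     ```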